Applications of lp-Norms and their Smooth Approximations for Gradient Based Learning Vector Quantization

Authors

  • Mandy Lange
  • Dietlind Zühlke
  • Olaf Holz
  • Thomas Villmann
Abstract

Learning vector quantization applying non-standard metrics has become quite popular for improving classification performance compared to standard approaches using the Euclidean distance. Kernel metrics and quadratic forms belong to the most promising approaches. In this paper we consider Minkowski distances (lp-norms). In particular, l1-norms are known to be robust against noise in the data, so if this structural knowledge about the data is available in advance, this norm should be utilized. However, applying such distances in gradient-based learning algorithms requires calculating the respective derivatives. Because lp-distance formulas contain the absolute value function, which is not differentiable at zero, smooth approximations thereof are required. In this paper we consider several smooth, consistent approximations suitable for numerical evaluation and demonstrate their applicability in exemplary real-world applications.
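To illustrate the idea (this is a common smoothing device, not necessarily the specific approximations studied in the paper), one can replace |t| by the differentiable surrogate sqrt(t^2 + eps), which converges to |t| as eps → 0, and then differentiate the resulting smoothed lp-distance by the chain rule. The function names below are illustrative:

```python
import numpy as np

def smooth_abs(t, eps=1e-6):
    # Smooth surrogate for |t|: sqrt(t^2 + eps) is differentiable everywhere
    # and tends to |t| as eps -> 0.
    return np.sqrt(t * t + eps)

def smooth_lp_distance(x, w, p=1.0, eps=1e-6):
    # Smoothed Minkowski distance d_p(x, w) = (sum_i |x_i - w_i|^p)^(1/p),
    # with |.| replaced by its smooth surrogate.
    return np.sum(smooth_abs(x - w, eps) ** p) ** (1.0 / p)

def smooth_lp_gradient(x, w, p=1.0, eps=1e-6):
    # Gradient of the smoothed distance w.r.t. the prototype w (chain rule):
    # dD/dw_i = -D^(1-p) * s_i^(p-1) * (x_i - w_i) / s_i,  s_i = smooth_abs(x_i - w_i).
    d = x - w
    s = smooth_abs(d, eps)
    dist = np.sum(s ** p) ** (1.0 / p)
    return -(dist ** (1.0 - p)) * (s ** (p - 1.0)) * (d / s)
```

For p = 2 and small eps this reduces to the familiar gradient −(x − w)/‖x − w‖ of the Euclidean distance, which provides a simple sanity check of the derivation.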


Similar articles

Regularization in Relevance Learning Vector Quantization Using l1-Norms

We propose in this contribution a method for l1-regularization in prototype based relevance learning vector quantization (LVQ) for sparse relevance profiles. Sparse relevance profiles in hyperspectral data analysis fade down those spectral bands which are not necessary for classification. In particular, we consider the sparsity in the relevance profile enforced by LASSO optimization. The latter...
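The l1 (LASSO) penalty mentioned above is commonly enforced with a proximal soft-thresholding step that shrinks relevance weights toward zero and sets small ones exactly to zero, producing a sparse profile. The following sketch shows the generic operator; the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(relevances, reg, lr):
    # Proximal step for the l1 penalty with strength `reg` and step size `lr`:
    # each weight is shrunk by reg*lr in magnitude, and weights smaller than
    # that are cut exactly to zero (the source of the sparse relevance profile).
    t = reg * lr
    return np.sign(relevances) * np.maximum(np.abs(relevances) - t, 0.0)
```

Applied after each gradient step on the relevance profile, this yields exact zeros for uninformative spectral bands rather than merely small weights.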



Image Compression Based on a Novel Fuzzy Learning Vector Quantization Algorithm

We introduce a novel fuzzy learning vector quantization algorithm for image compression. The design procedure of this algorithm encompasses two basic issues. Firstly, a modified objective function of the fuzzy c-means algorithm is reformulated and then is minimized by means of an iterative gradient-descent procedure. Secondly, the training procedure is equipped with a systematic strategy to acc...


Greedy vector quantization

We investigate the greedy version of the L^p-optimal vector quantization problem for an R^d-valued random vector X ∈ L^p. We show the existence of a sequence (a_N)_{N≥1} such that a_N minimizes a ↦ ‖ min_{1≤i≤N−1} |X − a_i| ∧ |X − a| ‖_{L^p} (the L^p-mean quantization error at level N induced by (a_1, ..., a_{N−1}, a)). We show that this sequence produces L^p-rate-optimal N-tuples (a_1, ..., a_N) (i.e. the L^p-mean q...


Scalable Support Vector Machine for Semi-supervised Learning

Owing to the prevalence of unlabeled data, semi-supervised learning has recently drawn significant attention and has found application in many real-world settings. In this paper, we present the so-called Graph-based Semi-supervised Support Vector Machine (gS3VM), a method that leverages the excellent generalization ability of kernel-based methods together with the geometrical and distributive informati...




Publication date: 2014